
    Instantaneous multi-sensor task allocation in static and dynamic environments

    A sensor network often consists of a large number of sensing devices of different types. Upon deployment in the field, these sensing devices form an ad hoc network, using wireless links or cables to communicate with each other. Sensor networks are increasingly used to support emergency responders in the field, usually requiring many sensing tasks to be supported at the same time. By a sensing task we mean any job that requires some amount of sensing resources to be accomplished, such as localizing persons in need of help or detecting an event. Tasks might share the use of a sensor, but more often compete to control it exclusively because of the limited number of sensors and overlapping needs with other tasks. Sensors are in fact scarce and in high demand. In such cases, it might not be possible to satisfy the requirements of all tasks using the available sensors. The fundamental question to answer is therefore: “Which sensor should be allocated to which task?”, which summarizes the Multi-Sensor Task Allocation (MSTA) problem. We focus on a particular MSTA instance where the environment does not provide enough information to plan for future allocations, constraining us to perform instantaneous allocation. We look at this problem in both the static setting, where all task requests from emergency responders arrive at once, and the dynamic setting, where tasks arrive and depart over time. We provide novel solutions based on centralized and distributed approaches. We evaluate their performance mainly using simulations on randomly generated problem instances; for the dynamic setting, we also consider the feasibility of deploying part of the distributed allocation system on user mobile devices. Our solutions scale well with different numbers of task requests and improve the utility of the network by prioritizing the most important tasks.
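
The instantaneous-allocation idea can be sketched as a priority-greedy assignment. This is an illustrative toy, not the thesis's actual centralized or distributed algorithms, and all names and data structures are invented:

```python
def allocate(sensors, tasks):
    """sensors: {sensor_id: sensor_type}; tasks: [(task_id, priority, [needed_type, ...])].
    Serve tasks in descending priority; a task is allocated only if every one
    of its requirements can be met, so scarce sensors go to the most important
    tasks first."""
    free = dict(sensors)                      # sensors not yet assigned
    assignment = {}                           # task_id -> [sensor_id, ...]
    for task_id, _priority, needed in sorted(tasks, key=lambda t: -t[1]):
        picked = []
        for wanted_type in needed:
            match = next((s for s, typ in free.items()
                          if typ == wanted_type and s not in picked), None)
            if match is None:                 # task cannot be fully satisfied
                break
            picked.append(match)
        else:                                 # all requirements met: commit
            for s in picked:
                del free[s]
            assignment[task_id] = picked
    return assignment
```

Because tasks exclusively control their sensors here, a lower-priority task is simply skipped when the sensors it needs are taken.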

    Conversational Sensing

    Recent developments in sensing technologies, mobile devices and context-aware user interfaces have made it possible to represent information fusion and situational awareness as a conversational process among actors - human and machine agents - at or near the tactical edges of a network. Motivated by use cases in the domain of security, policing and emergency response, this paper presents an approach to information collection, fusion and sense-making based on the use of natural language (NL) and controlled natural language (CNL) to support richer forms of human-machine interaction. The approach uses a conversational protocol to facilitate a flow of collaborative messages from NL to CNL and back again, in support of interactions such as: turning eyewitness reports from human observers into actionable information (from both trained and untrained sources); fusing information from humans and physical sensors (with associated quality metadata); and assisting human analysts to make the best use of available sensing assets in an area of interest (governed by management and security policies). CNL is used as a common formal knowledge representation for both machine and human agents to support reasoning, semantic information fusion and the generation of rationale for inferences, in ways that remain transparent to human users. Examples are provided of various alternative styles of user feedback, including NL, CNL and graphical feedback. A pilot experiment with human subjects shows that a prototype conversational agent is able to gather usable CNL information from untrained human subjects.
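
The NL-to-CNL step of such a conversational protocol can be illustrated with a toy pattern matcher. The template and the CNL wording below are invented for illustration, not the paper's actual controlled language:

```python
import re

# One hypothetical report template; a real agent would hold many templates and
# ask a clarifying question when none of them fits the report.
REPORT = re.compile(r"(?:i (?:can )?see|there is) an? (\w+) (?:in|at|near) the (\w+)")

def to_cnl(report):
    """Map a free-text eyewitness report to a fixed CNL sentence, or None."""
    m = REPORT.search(report.lower())
    if m is None:
        return None                # would trigger a follow-up question instead
    thing, place = m.groups()
    return f"there is a {thing} that is located in the {place}"
```

The point of the round trip is that the CNL output is unambiguous enough for machine reasoning while staying readable to the human who produced the original report.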

    Human-machine conversations to support multi-agency missions

    In domains such as emergency response, environmental monitoring, policing and security, sensor and information networks are deployed to assist human users across multiple agencies to conduct missions at or near the 'front line'. These domains present challenging problems in terms of human-machine collaboration: human users need to task the network to help them achieve mission objectives, while humans (sometimes the same individuals) are also sources of mission-critical information. We propose a natural language-based conversational approach to supporting human-machine working in mission-oriented sensor networks. We present a model for human-machine and machine-machine interactions in a realistic mission context, and evaluate the model using an existing surveillance mission scenario. The model supports the flow of conversations from full natural language to a form of Controlled Natural Language (CNL) amenable to machine processing and automated reasoning, including high-level information fusion tasks. We introduce a mechanism for presenting the gist of verbose CNL expressions in a form more convenient for human users. We show how the conversational interactions supported by the model include requests for expansions and explanations of machine-processed information.
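
The gisting mechanism can be imagined along these lines. This is a minimal sketch; the CNL pattern and the headline format are invented, not the paper's actual mechanism:

```python
import re

def gist(cnl_sentence):
    """Collapse a verbose CNL sentence into a short headline, keeping only the
    entity, its type, and its location; return the input unchanged if the
    sentence does not match the expected shape."""
    pattern = r"there is a (\w+)(?: named (\w+))? that is located in the (\w+)"
    m = re.match(pattern, cnl_sentence)
    if m is None:
        return cnl_sentence
    thing, name, place = m.groups()
    return f"{name or thing}: {thing} in {place}"
```

A user who wants the detail back would then issue an expansion request, which returns the full CNL sentence the gist was derived from.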

    Human Factors in Intelligence, Surveillance, and Reconnaissance: Gaps for Soldiers and Technology Recommendations

    We investigate the gaps for Soldiers in information collection and resource management for Intelligence, Surveillance, and Reconnaissance (ISR). ISR comprises the intelligence functions supporting military operations; we concentrate on ISR for physical sensors (air and ground platforms). To identify gaps, we use approaches from Human Factors (interactions between humans and technical systems to optimize human and system performance) at the level of Soldier functions/activities in ISR. Key gaps (e.g., the loud auditory signatures of some air assets, unofficial ISR requests, and unintended battlefield effects) are identified. These gaps illustrate that ISR is not purely a technical problem. Instead, interactions between technical systems, humans, and the environment result in unpredictability and adaptability in using technical systems. To mitigate these gaps, we provide technology recommendations. Keywords: intelligence, surveillance, and reconnaissance; ISR; human factors; human-systems integration; cognitive systems engineering; intelligence

    Tasking and sharing sensing assets using controlled natural language

    We introduce an approach to representing intelligence, surveillance, and reconnaissance (ISR) tasks at a relatively high level in controlled natural language. We demonstrate that this facilitates both human interpretation and machine processing of tasks. More specifically, it allows the automatic assignment of sensing assets to tasks, and the informed sharing of tasks between collaborating users in a coalition environment. To enable automatic matching of sensor types to tasks, we created a machine-processable knowledge representation based on the Military Missions and Means Framework (MMF), and implemented a semantic reasoner to match task types to sensor types. We combined this mechanism with a sensor-task assignment procedure based on a well-known distributed protocol for resource allocation. In this paper, we re-formulate the MMF ontology in Controlled English (CE), a type of controlled natural language designed to be readable by a native English speaker whilst representing information in a structured, unambiguous form to facilitate machine processing. We show how CE can be used to describe both ISR tasks (for example, detection, localization, or identification of particular kinds of object) and sensing assets (for example, acoustic, visual, or seismic sensors, mounted on motes or unmanned vehicles). We show how these representations enable an automatic sensor-task assignment process. Where a group of users are cooperating in a coalition, we show how CE task summaries give users in the field a high-level picture of ISR coverage of an area of interest. This allows them to make efficient use of sensing resources by sharing tasks.
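
The task-type-to-sensor-type matching step can be caricatured with a lookup table. The real system reasons over an MMF-based ontology expressed in CE; the capability table and all names below are invented stand-ins:

```python
# A hand-rolled stand-in for the ontology-driven matchmaker: each asset type
# maps to the set of ISR task types it can perform.
CAPABILITIES = {
    "acoustic_sensor": {"detect"},
    "camera":          {"detect", "localize", "identify"},
    "seismic_sensor":  {"detect", "localize"},
}

def assets_for_task(task_type):
    """Return, in sorted order, the asset types able to perform a task type."""
    return sorted(a for a, caps in CAPABILITIES.items() if task_type in caps)
```

In the paper's setting, the output of this matching step is what gets handed to the distributed resource-allocation protocol, so only semantically capable assets are ever considered for assignment.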

    A system architecture for exploiting mission information requirement and resource allocation

    In a military scenario, commanders need to determine what kinds of information will help them execute missions. The amount of information available to support each mission is constrained by the availability of information assets. For example, there may be limits on the number of sensors that can be deployed to cover a certain area, and limits on the bandwidth available to collect data from those sensors for processing. Therefore, options for satisfying information requirements should take into consideration constraints on the underlying information assets, which in certain cases could simultaneously support multiple missions. In this paper, we propose a system architecture for modeling missions and allocating information assets among them. We model a mission as a graph of tasks with temporal and probabilistic relations. Each task requires some information provided by the information assets. Our system suggests which information assets should be allocated among missions. Missions are compatible with each other if their needs do not exceed the limits of the information assets; otherwise, feedback is sent to the commander indicating that information requirements need to be adjusted. The decision loop eventually converges, and the utilization of the resources is maximized.
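
The compatibility check inside that decision loop can be sketched in a few lines. This is a simplified illustration (it ignores the temporal and probabilistic task relations in the mission graph), with invented asset names:

```python
from collections import Counter

def compatible(missions, supply):
    """missions: {mission: {asset_type: units required}}; supply: {asset_type: units}.
    Missions are compatible when their combined demand for each asset type
    stays within supply; otherwise the oversubscribed asset types are returned
    as feedback for the commander to adjust requirements."""
    demand = Counter()
    for needs in missions.values():
        demand.update(needs)
    oversubscribed = {a: n for a, n in demand.items() if n > supply.get(a, 0)}
    return not oversubscribed, oversubscribed
```

Each pass of the loop either accepts the current requirements or reports exactly which asset types are over their limits, which is what lets the loop converge.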

    Broadcast scheduling with data bundles

    Broadcast scheduling has been extensively studied in wireless environments where a base station broadcasts data to multiple users. Due to the limited bandwidth of the single wireless channel, only a subset of the needs may be satisfiable, and so maximizing total (weighted) throughput is a popular objective. In many realistic applications, however, data are dependent or correlated, in the sense that the joint utility of a set of items is not simply the sum of their individual utilities. On the one hand, substitute data may provide overlapping information, so one data item may have lower value if a second data item has already been delivered; on the other hand, complementary data are more valuable than the sum of their parts if, for example, one data item is only useful in the presence of a second data item. In this paper, we define a data bundle to be a set of data items with possibly nonadditive joint utility, and we study a resulting broadcast scheduling optimization problem whose objective is to maximize the utility provided by the data delivered.
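
The nonadditive utility of a bundle can be made concrete with a small sketch (the item names and utility values are invented):

```python
def bundle_utility(delivered, bundles):
    """bundles: [(set_of_items, joint_utility)]. A bundle contributes its joint
    utility only when every one of its items has been delivered, so the total
    is generally not the sum of per-item values."""
    return sum(u for items, u in bundles if items <= delivered)

# Complementary items: the pair is worth far more than its parts. A substitute
# pair could instead be modeled with a negative-utility bundle that cancels
# part of the overlap.
BUNDLES = [({"map"}, 1), ({"key"}, 1), ({"map", "key"}, 10)]
```

Under this model, a scheduler that greedily maximizes per-item value can miss high-value complements, which is what makes the bundle-aware scheduling problem interesting.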

    Utility-based joint sensor selection and congestion control for task-oriented WSNs

    Task-centric wireless sensor network environments are often characterized by the simultaneous operation of multiple tasks. Individual tasks compete for constrained resources and thus need resource mediation algorithms at two levels. First, different sensors must be allocated to different tasks based on the combination of sensor attributes and task requirements. Subsequently, sensor data rates on the various data routes must be dynamically adapted to share the available wireless bandwidth, especially when links experience traffic congestion. In this paper, we investigate heuristics for incrementally modifying the sensor-task matching process to incorporate changes in the transport capacity constraints or feasible task utility values.
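
The second, rate-adaptation level can be illustrated with a toy congestion response. The paper's heuristics also revisit the sensor-task matching itself; this sketch only shows proportional rate scaling on a congested link, with invented task names:

```python
def adapt_rates(rates, capacity):
    """rates: {task: current data rate on a shared link}. When the summed
    rates exceed the link capacity, scale every task's rate down by the same
    factor, preserving the tasks' relative shares of the bandwidth."""
    total = sum(rates.values())
    if total <= capacity:
        return dict(rates)         # no congestion: leave rates unchanged
    scale = capacity / total
    return {task: rate * scale for task, rate in rates.items()}
```

A utility-based scheme would weight this scaling by each task's marginal utility rather than sharing the loss equally, which is the direction the paper takes.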

    Knowledge-Driven Agile Sensor-Mission Assignment

    In this paper, we show how knowledge representation and reasoning techniques can support sensor-mission assignment, proceeding from a high-level specification of information requirements to the allocation of assets such as sensors and platforms. In our previous work, we showed how assets can be matched to mission tasks by formalising the Military Missions and Means Framework in terms of an ontology, and using this ontology to drive a matchmaking process derived from the area of semantic Web services. The work reported here extends the earlier approach in two important ways: (1) by providing a richer and more realistic way for a user to specify their information requirements, and (2) by using the results of the semantic matchmaking process to define the search space for efficient asset allocation algorithms. We accomplish (1) by means of a rule-based representation of the NIIRS approach to relating sensed data to the tasks that data may support. We illustrate (2) by showing how the output of our matching process can drive a well-known efficient combinatorial auction algorithm (CASS). Finally, we summarise the status of our illustration-of-concept application, SAM (Sensor Assignment to Missions), and discuss the various roles such an application can play in supporting sensor-mission assignment.
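
The allocation stage that the matchmaking output feeds can be sketched as winner determination over bids for asset bundles. CASS itself prunes this search with branch and bound; the brute-force version below is only an illustration of the search space, with invented task and asset names:

```python
from itertools import combinations

def best_allocation(bids):
    """bids: [(task, set_of_assets, value)]. Choose the maximum-value subset
    of bids whose asset sets are pairwise disjoint, i.e. no asset is granted
    to two tasks. Semantic matchmaking shrinks this search space by keeping
    only bids whose assets can actually serve the task."""
    best, best_value = (), 0
    for k in range(1, len(bids) + 1):
        for combo in combinations(bids, k):
            asset_sets = [b[1] for b in combo]
            if all(x.isdisjoint(y) for x, y in combinations(asset_sets, 2)):
                value = sum(b[2] for b in combo)
                if value > best_value:
                    best, best_value = combo, value
    return list(best), best_value
```

Even this tiny example shows why restricting the search space matters: winner determination is exponential in the number of bids, so every semantically infeasible bid removed up front pays off.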